Survey on interpretability research of deep learning
Lingmin LI, Mengran HOU, Kun CHEN, Junmin LIU
Journal of Computer Applications    2022, 42 (12): 3639-3650.   DOI: 10.11772/j.issn.1001-9081.2021091649
Abstract

In recent years, deep learning has been widely applied in many fields. However, because deep neural network models involve highly nonlinear operations, their interpretability is poor; they are often referred to as "black box" models and cannot be applied in key fields with high performance requirements. It is therefore necessary to study the interpretability of deep learning. Firstly, deep learning was introduced briefly. Then, centering on the interpretability of deep learning, existing research work was analyzed from eight aspects: hidden layer visualization, Class Activation Mapping (CAM), sensitivity analysis, frequency principle, robust perturbation testing, information theory, interpretable modules, and optimization methods. At the same time, applications of deep learning in the fields of network security, recommender systems, medical care, and social networks were demonstrated. Finally, the existing problems and future development directions of deep learning interpretability research were discussed.
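
Among the surveyed aspects, Class Activation Mapping (CAM) is the most concrete to illustrate: for an architecture with global average pooling before the classifier, the class activation map is the weighted sum of the final convolutional feature maps, weighted by the classifier weights of the target class. The following is a minimal sketch of vanilla CAM, not the survey's own implementation; the torchvision ResNet-18 backbone, the placeholder input, and the layer4/fc attribute names are assumptions for illustration only.

import torch
import torch.nn.functional as F
from torchvision import models

# Minimal vanilla CAM sketch on a ResNet-18 backbone (assumption: torchvision model).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224)  # placeholder preprocessed image batch

# Capture the last convolutional feature maps via a forward hook.
feats = {}
def hook(module, inputs, output):
    feats["maps"] = output  # shape: (1, 512, 7, 7)
model.layer4.register_forward_hook(hook)

with torch.no_grad():
    logits = model(x)
    cls = logits.argmax(dim=1).item()  # predicted class index

# CAM = class-specific weighted sum of the final conv feature maps,
# using the fully connected layer's weights for the predicted class.
weights = model.fc.weight[cls]                        # (512,)
cam = torch.einsum("c,chw->hw", weights, feats["maps"][0])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

# Upsample to input resolution for overlay on the original image.
cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
print(cam.shape)  # torch.Size([224, 224])

The resulting heat map highlights the spatial regions that most increase the score of the predicted class, which is the interpretability signal CAM-style methods provide.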
